68 research outputs found

    Synthesis of sup-interpretations: a survey

    In this paper, we survey the complexity of distinct methods that allow the programmer to synthesize a sup-interpretation, a function providing an upper bound on the size of the output values computed by a program. It is a static space analysis tool that does not take time consumption into account. Although clearly related, sup-interpretation is independent of termination, since it only provides an upper bound on terminating computations. First, we study some undecidable properties of sup-interpretations from a theoretical point of view. Next, we fix term rewriting systems as our computational model and show that a sup-interpretation can be obtained through a well-known termination technique, polynomial interpretations. The drawback is that this method only applies to total functions (strongly normalizing programs). To overcome this problem, we also study sup-interpretations through the notion of quasi-interpretation. Quasi-interpretations suffer from a drawback of their own, the subterm property, which drastically restricts the shape of the functions that can be considered. Again, we overcome this problem by introducing a new notion of interpretation based mainly on the dependency pair method. We study the decidability and complexity of the sup-interpretation synthesis problem for all three tools over sets of polynomials. Finally, we build on previous work on termination and runtime complexity to infer sup-interpretations.
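    As a minimal illustration (not an example taken from the survey itself), consider unary addition: a standard additive polynomial interpretation already acts as a sup-interpretation, because the interpretation of a term bounds the size of any value it computes.

        Rules:           add(0, y) -> y          add(s(x), y) -> s(add(x, y))
        Interpretation:  [0] = 1     [s](X) = X + 1     [add](X, Y) = X + Y

        [add(0, y)]    = 1 + Y        >=  Y             = [y]
        [add(s(x), y)] = (X + 1) + Y  >=  (X + Y) + 1   = [s(add(x, y))]

    Since both rules are non-increasing under [.], the polynomial [add](X, Y) = X + Y bounds the size of every value computed by add, which is exactly the guarantee a sup-interpretation must provide.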

    Complexity Information Flow in a Multi-threaded Imperative Language

    We propose a type system to analyze the time consumed by multi-threaded imperative programs with a shared global memory, which delineates a class of safe multi-threaded programs. We demonstrate that a safe multi-threaded program runs in polynomial time if (i) it is strongly terminating with respect to a non-deterministic scheduling policy or (ii) it terminates with respect to a deterministic and quiet scheduling policy. As a consequence, we also characterize the set of polynomial time functions. The type system presented is based on the fundamental notion of data tiering, which is central in implicit computational complexity. It regulates the information flow in a computation. This aspect is interesting in that the type system bears a resemblance to type-based information flow analysis and notions of non-interference. As far as we know, this is the first characterization by a type system of polynomial time multi-threaded programs
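    As a rough sketch of the tiering discipline (with hypothetical tier annotations written as comments, not the paper's actual type syntax), loop guards may depend only on tier-1 data, which is never increased, while tier-0 data may grow but must not flow back into guards:

        // Minimal sketch of data tiering; tier annotations are hypothetical comments.
        // Tier-1 data controls iteration and must not grow; tier-0 data may grow
        // but must not influence loop guards.
        public class TieringSketch {
            static int safeLoop(int n) {      // n : tier 1, controls the computation
                int acc = 0;                  // acc : tier 0, scratch data that may grow
                int i = n;                    // i : tier 1, copy of tier-1 data
                while (i > 0) {               // guard reads tier-1 data only
                    acc = acc + 2;            // tier-0 update: allowed
                    i = i - 1;                // tier-1 data only decreases: allowed
                    // i = acc;               // rejected: tier-0 data flowing into a guard
                }
                return acc;                   // running time stays polynomial in n
            }

            public static void main(String[] args) {
                System.out.println(safeLoop(10)); // prints 20
            }
        }

    In the multi-threaded setting of the paper, the same discipline applies to threads sharing the global memory, and the scheduling hypotheses (i) or (ii) above then guarantee polynomial running time.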

    Theory of higher order interpretations and application to Basic Feasible Functions

    Interpretation methods and their restrictions to polynomials have been widely used to control the termination and complexity of first-order term rewrite systems. This paper extends interpretation methods to a pure higher-order functional language. We develop a theory of higher-order functions that is well-suited for the complexity analysis of this programming language. The interpretation domain is a complete lattice and, consequently, we express program interpretation in terms of a least fixpoint. As an application, by bounding interpretations by higher-order polynomials, we characterize the Basic Feasible Functions at any order
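    For intuition only (an illustrative bound, not the paper's formal construction), a second-order polynomial can bound a higher-order combinator such as map: if F bounds the output size of the functional argument and L bounds the size of the list, the result of map has at most L elements, each of size at most F(L), so a candidate interpretation is

        [map](F, L) = L * (1 + F(L))

    and restricting all interpretations to higher-order polynomials of this kind is the sort of bound that carves out the Basic Feasible Functions.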

    A Type-Based Complexity Analysis of Object Oriented Programs

    A type system is introduced for a generic Object Oriented programming language in order to infer resource upper bounds. A sound and complete characterization of the set of polynomial time computable functions is obtained. As a consequence, the heap-space and the stack-space requirements of typed programs are also bounded polynomially. This type system is inspired by previous works on Implicit Computational Complexity, using tiering and non-interference techniques. The presented methodology has several advantages. First, it provides explicit big-O polynomial upper bounds to the programmer, hence its use could allow the programmer to avoid memory errors. Second, type checking is decidable in polynomial time. Last, it has good expressivity since it analyzes most object-oriented features like inheritance, overloading, overriding and recursion. Moreover, it can deal with loops guarded by objects and can also be extended to statements that alter the control flow, like break or return.
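    As a hedged illustration of a loop guarded by an object under such a tiering discipline (again, the tier annotations are hypothetical comments rather than the system's concrete syntax), traversing a tier-1 linked structure may update tier-0 accumulators but may never enlarge the structure that drives the loop:

        // Sketch of an object-guarded loop under tiering; annotations are hypothetical.
        class Node {
            int value;
            Node next;
            Node(int value, Node next) { this.value = value; this.next = next; }
        }

        public class ObjectGuardSketch {
            static int sum(Node list) {          // list : tier 1, guards the loop
                int total = 0;                   // total : tier 0, may grow
                Node cur = list;                 // cur : tier 1, alias of tier-1 data
                while (cur != null) {            // loop guarded by a tier-1 object
                    total += cur.value;          // tier-0 update: allowed
                    cur = cur.next;              // walks the structure without enlarging it
                    // cur = new Node(0, cur);   // rejected: would grow the tier-1 guard
                }
                return total;
            }

            public static void main(String[] args) {
                Node xs = new Node(1, new Node(2, new Node(3, null)));
                System.out.println(sum(xs));     // prints 6
            }
        }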

    Bounding Reactions in the Pi-calculus using Interpretations

    We present a new static resource analysis for the pi-calculus that provides upper bounds on the number of reactions that might occur at runtime for a given process. This work is complementary to previous results on termination of processes: it captures strictly more processes, since it captures all the strongly normalizing processes, and it provides precise upper bounds on the number of communications on each channel. For that purpose, it combines interpretation methods, inspired by the polynomial interpretations introduced to study the complexity of term rewrite systems, with a notion of resource process that mimics reactions while keeping information about resource consumption in terms of communication. We also show that the presented analysis is general and can easily be adapted to study space properties of processes (for example, upper bounds on the size of the maximal value sent on a given channel during reaction)
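    The counting principle behind such interpretations can be stated abstractly (a simplification for intuition, not the paper's exact definitions): any weight assignment on processes that strictly decreases across reactions bounds the length of every reduction sequence,

        P -> P'   implies   w(P) >= w(P') + 1,   so at most w(P) reactions can fire from P

    and refining the weight per channel is what yields per-channel communication bounds.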

    Higher-order interpretations for higher-order complexity

    We design an interpretation-based theory of higher-order functions that is well-suited for the complexity analysis of a standard higher-order functional language à la ML. We manage to express the interpretation of a given program in terms of a least fixpoint, and we show that, when restricted to functions bounded by higher-order polynomials, interpretations exactly characterize the classes of tractable functions known as the Basic Feasible Functions at any order

    Quantum Programming with Inductive Datatypes: Causality and Affine Type Theory

    Inductive datatypes in programming languages allow users to define useful data structures such as natural numbers, lists, trees, and others. In this paper we show how inductive datatypes may be added to the quantum programming language QPL. We construct a sound categorical model for the language and by doing so we provide the first detailed semantic treatment of user-defined inductive datatypes in quantum programming. We also show our denotational interpretation is invariant with respect to big-step reduction, thereby establishing another novel result for quantum programming. Compared to classical programming, this property is considerably more difficult to prove and we demonstrate its usefulness by showing how it immediately implies computational adequacy at all types. To further cement our results, our semantics is entirely based on a physically natural model of von Neumann algebras, which are mathematical structures used by physicists to study quantum mechanics

    Types for controlling heap and stack in Java

    A type system is introduced for a strict but expressive subset of Java in order to infer resource upper bounds on both the heap-space and the stack-space requirements of typed programs. This type system is inspired by previous works on Implicit Computational Complexity, using tiering and non-interference techniques. The presented methodology has several advantages. First, it provides explicit polynomial upper bounds to the programmer, hence avoiding OutOfMemory and StackOverFlow errors. Second, type checking is decidable in linear time. Last, it has good expressivity as it analyzes most object-oriented features, like overloading and inheritance, and also handles flow statements controlled by objects

    A characterization of polynomial complexity classes using dependency pairs

    The dependency pair method has already shown its power in proving termination of term rewriting systems. We adapt this framework, using polynomial assignments, in order to characterize with two distinct criteria the set of functions computable in polynomial time and the set of functions computable in polynomial space. To our knowledge, this is the first attempt to capture complexity classes using the dependency pair method. The characterizations presented are inspired by previous works on implicit computational complexity and, particularly, by the notions of quasi-interpretation and sup-interpretation. Both criteria are decidable, so that resource upper bounds can be synthesized
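    For readers unfamiliar with the technique, the dependency pairs of a rewrite system isolate its recursive calls. On unary addition, for instance (an illustrative example, not one taken from the paper), the single dependency pair records that add calls itself on a structurally smaller argument:

        Rules:            add(0, y) -> y          add(s(x), y) -> s(add(x, y))
        Dependency pair:  ADD(s(x), y) -> ADD(x, y)

    Roughly speaking, constraining marked symbols such as ADD by polynomial assignments, rather than only by a well-founded order, is what turns the termination framework into a complexity criterion.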
